Oracle-Guided Design and Analysis of Learning-Based Cyber-Physical Systems
We are in a world where autonomous systems, such as self-driving cars, surgical robots, and robotic manipulators, are becoming a reality. Such systems are considered safety-critical since they interact with humans on a regular basis. Hence, before such systems can be integrated into our day-to-day lives, we need to guarantee their safety. Recent successes in machine learning (ML) and artificial intelligence (AI) have led to an increase in their use in real-world robotic systems: for example, complex perception modules in self-driving cars and deep reinforcement learning controllers in robotic manipulators. Although powerful, these components introduce an additional level of complexity when it comes to the formal analysis of autonomous systems. In this thesis, such systems are designated as Learning-Based Cyber-Physical Systems (LB-CPS). We take inspiration from the Oracle-Guided Inductive Synthesis (OGIS) paradigm to develop frameworks that can aid in achieving formal guarantees at different stages of an autonomous system's design and analysis pipeline. Furthermore, we show that to guarantee the safety of LB-CPS, the design (synthesis) and analysis (verification) must each consider feedback from the other. We consider five important parts of the design and analysis process and show a strong coupling among them, namely (i) Robust Control Synthesis from High-Level Safety Specifications; (ii) Diagnosis and Repair of Safety Requirements for Control Synthesis; (iii) Counterexample-Guided Data Augmentation for training high-accuracy ML models; (iv) Simulation-Guided Falsification and Verification against Adversarial Environments; and (v) Bridging the Model and Real-World Gap. Finally, we introduce VerifAI, a software toolkit for the design and analysis of AI-based systems, developed to provide a common formal platform for implementing design and analysis frameworks for LB-CPS.
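As background for the OGIS paradigm the thesis builds on, the following is a minimal sketch of the learner-oracle loop, assuming hypothetical Learner and Oracle interfaces; it illustrates the general paradigm, not the thesis's actual implementation.

    # Minimal sketch of an Oracle-Guided Inductive Synthesis (OGIS) loop.
    # The learner proposes a candidate design from the examples seen so far;
    # the oracle (e.g., a verifier or falsifier) either accepts the candidate
    # or returns a counterexample that guides the next proposal.
    # All interfaces here are hypothetical illustrations.
    def synthesize(learner, oracle, max_iters=100):
        examples = []
        for _ in range(max_iters):
            candidate = learner.propose(examples)    # inductive generalization
            verdict = oracle.check(candidate)        # verification query
            if verdict.is_correct:
                return candidate                     # meets the specification
            examples.append(verdict.counterexample)  # feedback to the learner
        return None                                  # budget exhausted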
A Formalization of Robustness for Deep Neural Networks
Deep neural networks have been shown to lack robustness to small input
perturbations. The process of generating the perturbations that expose the lack
of robustness of neural networks is known as adversarial input generation. This
process depends on the goals and capabilities of the adversary. In this paper,
we propose a unifying formalization of the adversarial input generation process
from a formal methods perspective. We provide a definition of robustness that
is general enough to capture different formulations. The expressiveness of our
formalization is shown by modeling and comparing a variety of adversarial
attack techniques.
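As a concrete instance of the kind of property such a formalization captures, a standard local-robustness definition can be stated as follows; this norm-bounded formulation is one common special case rather than necessarily the paper's exact definition. A classifier $f$ is $\epsilon$-robust at an input $x$ if

    \[ \forall x'.\; \|x' - x\|_p \le \epsilon \implies f(x') = f(x), \]

i.e., no perturbation of $x$ within an $\ell_p$-ball of radius $\epsilon$ changes the predicted label.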
SOTER: A Runtime Assurance Framework for Programming Safe Robotics Systems
The recent drive towards achieving greater autonomy and intelligence in
robotics has led to high levels of complexity. Autonomous robots increasingly
depend on third-party off-the-shelf components and complex machine-learning
techniques. This trend makes it challenging to provide strong design-time
certification of correct operation.
To address these challenges, we present SOTER, a robotics programming
framework with two key components: (1) a programming language for implementing
and testing high-level reactive robotics software and (2) an integrated runtime
assurance (RTA) system that helps enable the use of uncertified components,
while still providing safety guarantees. SOTER provides language primitives to
declaratively construct an RTA module consisting of an advanced,
high-performance controller (uncertified), a safe, lower-performance controller
(certified), and the desired safety specification. The framework provides a
formal guarantee that a well-formed RTA module always satisfies the safety
specification without completely sacrificing performance, by using the
higher-performance uncertified components whenever safe. SOTER allows the complex
robotics software stack to be constructed as a composition of RTA modules,
where each uncertified component is protected by an RTA module.
To demonstrate the efficacy of our framework, we consider a real-world
case study of building a safe drone surveillance system. Our experiments, both in simulation and on actual drones, show that the SOTER-enabled RTA ensures the safety of the system, including when untrusted third-party components have bugs or deviate from the desired behavior.
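To make the RTA construction concrete, here is a minimal sketch of the classic switching logic behind such a module, with hypothetical controller and monitor interfaces; it illustrates the general pattern, not SOTER's actual language primitives.

    # Minimal sketch of a runtime assurance (RTA) module: an uncertified
    # advanced controller runs by default, and a certified safe controller
    # takes over whenever the monitor cannot show that the proposed action
    # keeps the system within the safety specification.
    # All interfaces here are hypothetical.
    def rta_step(state, advanced_ctrl, safe_ctrl, monitor):
        action = advanced_ctrl.act(state)      # high-performance, uncertified
        if monitor.safe_under(state, action):  # conservative safety check
            return action
        return safe_ctrl.act(state)            # certified fallback controller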
Counterexample-Guided Data Augmentation
We present a novel framework for augmenting data sets for machine learning
based on counterexamples. Counterexamples are misclassified examples that have
important properties for retraining and improving the model. Key components of
our framework include a counterexample generator, which produces data items
that are misclassified by the model, and error tables, a novel data structure
that stores information pertaining to misclassifications. Error tables can be
used to explain the model's vulnerabilities and are used to efficiently
generate counterexamples for augmentation. We show the efficacy of the proposed
framework by comparing it to classical augmentation techniques on a case study
of object detection in autonomous driving based on deep neural networks.
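A minimal sketch of the augmentation loop described above is shown below; the generator and retraining interfaces are hypothetical placeholders rather than the paper's actual components.

    # Sketch of counterexample-guided data augmentation: repeatedly find
    # misclassified items, record them in an error table, augment the
    # training set, and retrain. All interfaces are hypothetical.
    def augment_and_retrain(model, train_set, generator, rounds=5):
        error_table = []                             # misclassification records
        for _ in range(rounds):
            counterexamples = generator.find_misclassified(model)
            error_table.extend(counterexamples)      # explains vulnerabilities
            train_set = train_set + counterexamples  # augment the data set
            model = model.retrain(train_set)
        return model, error_table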
Verifying Controllers Against Adversarial Examples with Bayesian Optimization
Recent successes in reinforcement learning have led to the development of
complex controllers for real-world robots. As these robots are deployed in
safety-critical applications and interact with humans, it becomes essential to
ensure safety in order to avoid causing harm. A first step in this direction is
to test the controllers in simulation. To be able to do this, we need to
capture what we mean by safety and then efficiently search the space of all
behaviors to see if they are safe. In this paper, we present an active-testing
framework based on Bayesian Optimization. We specify safety constraints using
logic and exploit structure in the problem in order to test the system for
adversarial counterexamples that violate the safety specifications. These
specifications are defined as complex Boolean combinations of smooth functions
on the trajectories and, unlike reward functions in reinforcement learning, are
expressive and impose hard constraints on the system. In our framework, we
exploit regularity assumptions on individual functions in the form of a Gaussian
Process (GP) prior. We combine these into a coherent optimization framework
using the problem structure. The resulting algorithm is able to provably verify complex safety specifications or, alternatively, find counterexamples.
Experimental results show that the proposed method is able to find adversarial
examples quickly.
Comment: Proc. of the IEEE International Conference on Robotics and Automation, 201
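As a rough illustration of the active-testing idea, the sketch below fits a Gaussian Process to the specification's robustness value over test parameters and samples where a violation looks most likely; it uses scikit-learn's GaussianProcessRegressor, the robustness function (simulate and score one test) is a hypothetical stand-in, and the lower-confidence-bound search is a generic choice rather than the paper's actual algorithm.

    # Sketch of GP-based active testing: model the robustness value of the
    # safety specification over test parameters, then query where the lower
    # confidence bound is smallest, i.e. where a violation (robustness < 0)
    # looks most likely. `robustness` is a hypothetical stand-in.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def active_test(robustness, bounds, n_init=5, n_iters=25, kappa=2.0):
        lo, hi = np.array(bounds).T              # bounds: list of (low, high)
        X = np.random.uniform(lo, hi, size=(n_init, len(bounds)))
        y = np.array([robustness(x) for x in X])
        gp = GaussianProcessRegressor(kernel=RBF())
        for _ in range(n_iters):
            gp.fit(X, y)
            cand = np.random.uniform(lo, hi, size=(1000, len(bounds)))
            mu, sigma = gp.predict(cand, return_std=True)
            x_next = cand[np.argmin(mu - kappa * sigma)]  # lower confidence bound
            y_next = robustness(x_next)
            if y_next < 0:
                return x_next                    # counterexample found
            X, y = np.vstack([X, x_next]), np.append(y, y_next)
        return None                              # no violation found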
Diagnosis and Repair for Synthesis from Signal Temporal Logic Specifications
We address the problem of diagnosing and repairing specifications for hybrid
systems formalized in signal temporal logic (STL). Our focus is on the setting
of automatic synthesis of controllers in a model predictive control (MPC)
framework. We build on recent approaches that reduce the controller synthesis
problem to solving one or more mixed integer linear programs (MILPs), where
infeasibility of a MILP usually indicates unrealizability of the controller
synthesis problem. Given an infeasible STL synthesis problem, we present
algorithms that provide feedback on the reasons for unrealizability, and
suggestions for making it realizable. Our algorithms are sound and complete,
i.e., they provide a correct diagnosis, and always terminate with a non-trivial
specification that is feasible using the chosen synthesis method, when such a
solution exists. We demonstrate the effectiveness of our approach on the
synthesis of controllers for various cyber-physical systems, including an
autonomous driving application and an aircraft electric power system.
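To illustrate one standard mechanism behind such diagnosis, the sketch below relaxes a pair of conflicting constraints (a toy stand-in for the MILP encoding of STL atoms) with slack variables and minimizes the total slack; constraints that need nonzero slack localize the infeasibility and suggest how far a bound must move to become realizable. It uses the PuLP library and is an illustration, not the paper's algorithm.

    # Sketch of slack-based infeasibility diagnosis: each hard constraint
    # g_i(x) <= b_i is relaxed to g_i(x) <= b_i + s_i with slack s_i >= 0,
    # and the total slack is minimized. Nonzero slacks point to the
    # conflicting specification atoms. The constraints are a toy stand-in.
    import pulp

    prob = pulp.LpProblem("diagnose", pulp.LpMinimize)
    x = pulp.LpVariable("x")
    s1 = pulp.LpVariable("s1", lowBound=0)  # slack for atom 1
    s2 = pulp.LpVariable("s2", lowBound=0)  # slack for atom 2

    prob += s1 + s2      # objective: minimize total relaxation
    prob += x >= 3 - s1  # atom 1: x >= 3, relaxed
    prob += x <= 1 + s2  # atom 2: x <= 1, relaxed (conflicts with atom 1)
    prob.solve()

    # x >= 3 and x <= 1 cannot both hold; the solver reports how much slack
    # (a total of 2) is needed, suggesting a repair of one of the bounds.
    for s in (s1, s2):
        print(s.name, "=", s.value())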